UVeQFed: Universal Vector Quantization for Federated Learning

Authors

Abstract

Traditional deep learning models are trained at a centralized server using data samples collected from end users. Such data samples often include private information, which the users may not be willing to share. Federated learning (FL) is an emerging approach for training such models without requiring the users to share their data. FL consists of an iterative procedure, where in each iteration the users train a copy of the model locally. The server then collects the individual updates and aggregates them into a global model. A major challenge that arises in this method is the need of each user to repeatedly transmit its learned model over a throughput-limited uplink channel. In this work, we tackle this challenge using tools from quantization theory. In particular, we identify the unique characteristics associated with conveying trained models over rate-constrained channels, and propose a suitable quantization scheme for such settings, referred to as universal vector quantization for FL (UVeQFed). We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only a minimum distortion. We theoretically analyze the distortion, showing that it vanishes as the number of users grows. We also characterize how models trained with conventional federated averaging combined with UVeQFed converge to the model which minimizes the loss function. Our numerical results demonstrate the gains of UVeQFed over previously proposed schemes in terms of both the distortion induced in quantization and the accuracy of the resulting aggregated model.
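The uplink compression step described above can be illustrated with subtractive dithered quantization, a building block of universal quantization schemes such as UVeQFed. The sketch below is a simplification, not the paper's construction: it uses a one-dimensional uniform grid in place of a multi-dimensional lattice, and all names and parameters are illustrative. The dither is pseudo-random, so in practice the server would regenerate it from a seed shared with the user rather than receive it.

```python
import numpy as np

def sd_encode(update, step, rng):
    """User side: add a pseudo-random dither, round to the uniform grid.
    Only the integer indices need to be sent over the rate-limited uplink."""
    dither = rng.uniform(-step / 2, step / 2, size=update.shape)
    return np.round((update + dither) / step).astype(int), dither

def sd_decode(indices, dither, step):
    """Server side: rebuild the grid point and subtract the shared dither."""
    return indices * step - dither

step = 0.1
update = np.random.default_rng(1).normal(size=1000)  # a mock model update
indices, dither = sd_encode(update, step, np.random.default_rng(0))
recovered = sd_decode(indices, dither, step)
# Subtractive dithering guarantees |error| <= step / 2 per coordinate,
# with errors uniformly distributed and independent of the update itself.
assert np.max(np.abs(recovered - update)) <= step / 2 + 1e-9
```

Because the per-user quantization errors are independent and zero-mean, averaging many users' recovered updates at the server drives the aggregate distortion down, consistent with the vanishing-distortion behavior the abstract describes.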


Similar Articles

Generalized Learning Vector Quantization

We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and ...
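The GLVQ update rule summarized above can be sketched as follows. This is a minimal version under assumptions not stated in the snippet: squared Euclidean distances, the identity as the monotonic scaling function f, and in-place stochastic updates; all names are illustrative.

```python
import numpy as np

def glvq_step(x, y, protos, labels, lr=0.1):
    """One stochastic GLVQ step on sample (x, y): move the nearest correct
    prototype toward x and the nearest incorrect one away, weighted so that
    the relative distance mu = (d_plus - d_minus) / (d_plus + d_minus) shrinks."""
    d = np.sum((protos - x) ** 2, axis=1)                # squared Euclidean distances
    correct = labels == y
    i_p = int(np.argmin(np.where(correct, d, np.inf)))   # nearest correct prototype
    i_m = int(np.argmin(np.where(~correct, d, np.inf)))  # nearest incorrect prototype
    d_p, d_m = d[i_p], d[i_m]
    denom = (d_p + d_m) ** 2
    protos[i_p] += lr * (d_m / denom) * (x - protos[i_p])
    protos[i_m] -= lr * (d_p / denom) * (x - protos[i_m])
    return (d_p - d_m) / (d_p + d_m)                     # mu < 0 iff x is classified correctly

protos = np.array([[0.0, 0.0], [2.0, 2.0]])              # one prototype per class
labels = np.array([0, 1])
x, y = np.array([1.5, 1.5]), 0                           # a sample currently misclassified
mus = [glvq_step(x, y, protos, labels) for _ in range(100)]
assert mus[-1] < mus[0]                                  # repeated steps reduce mu
```

Iterating this step over labeled samples performs steepest descent on the sum of the mu values, which is the cost function (with f the identity) that the convergence argument is built on.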

A vector quantization approach to universal noiseless coding and quantization

A two-stage code is a block code in which each block of data is coded in two stages: the first stage codes the identity of a block code among a collection of codes, and the second stage codes the data using the identified code. The collection of codes may be noiseless codes, fixed-rate quantizers, or variable-rate quantizers. We take a vector quantization approach to two-stage coding, in which ...
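The two-stage idea described above can be sketched with a small collection of fixed-rate scalar codebooks; the codebooks and names below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def nearest(block, codebook):
    """Index of the nearest codeword for every sample in the block."""
    return np.argmin(np.abs(block[:, None] - codebook[None, :]), axis=1)

def two_stage_encode(block, codebooks):
    """Stage 1: identify the codebook with the lowest distortion on this block.
    Stage 2: code the block with the identified codebook.
    The transmitted description is (codebook index, per-sample codeword indices)."""
    errors = [np.sum((cb[nearest(block, cb)] - block) ** 2) for cb in codebooks]
    k = int(np.argmin(errors))
    return k, nearest(block, codebooks[k])

def two_stage_decode(k, indices, codebooks):
    """Look up the identified codebook, then the codewords."""
    return codebooks[k][indices]

# Two fixed-rate scalar quantizers tuned to different dynamic ranges.
codebooks = [np.linspace(-1, 1, 8), np.linspace(-4, 4, 8)]
block = np.random.default_rng(0).normal(scale=0.3, size=64)
k, idx = two_stage_encode(block, codebooks)
rebuilt = two_stage_decode(k, idx, codebooks)
```

By construction, the first stage never does worse than committing to any single codebook in the collection, which is the sense in which the two-stage code adapts to the source.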

One-pass adaptive universal vector quantization

We introduce a one-pass adaptive universal quantization technique for real, bounded alphabet, stationary sources. The algorithm is set on-line without any prior knowledge of the statistics of the sources which it might encounter and asymptotically achieves ideal performance on all sources that it sees. The system consists of an encoder and a decoder. At increasing intervals, the encoder refin...

Patch Processing for Relational Learning Vector Quantization

Recently, an extension of popular learning vector quantization (LVQ) to general dissimilarity data has been proposed, relational generalized LVQ (RGLVQ) [10, 9]. An intuitive prototype based classification scheme results which can divide data characterized by pairwise dissimilarities into priorly given categories. However, the technique relies on the full dissimilarity matrix and, thus, has squ...

Local Rejection Strategies for Learning Vector Quantization

Classification with rejection is well understood for classifiers which provide explicit class probabilities. The situation is more complicated for popular deterministic classifiers such as learning vector quantisation schemes: albeit reject options using simple distance-based geometric measures were proposed [4], their local scaling behaviour is unclear for complex problems. Here, we propose a ...


Journal

Journal title: IEEE Transactions on Signal Processing

Year: 2021

ISSN: 1053-587X, 1941-0476

DOI: https://doi.org/10.1109/tsp.2020.3046971